The Smiling Chatbot by Konstantin Prinz

Author: Konstantin Prinz
Language: English
Format: EPUB
ISBN: 9783658400286
Publisher: Springer Fachmedien Wiesbaden


5.5 Study 4: Empathy Toward Chatbots and Personality-dependent Susceptibility

5.5.1 Conceptual Development

In addition to the CASA paradigm, anthropomorphism plays a central role in the interaction with nonhuman entities. It has been investigated intensively in recent years, mainly in robotics research (e.g., Eyssel, Hegel, Horstmann, & Wagner, 2010; Hegel, Krach, Kircher, Wrede, & Sagerer, 2008; Kim et al., 2013). According to Epley et al. (2007, p. 864), anthropomorphism is “[…] the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike [sic] characteristics, motivations, intentions, or emotions.” This definitional approach highlights an essential feature distinguishing anthropomorphism from the CASA paradigm discussed above: whereas the latter is considered an unconscious process that is denied on a conscious level, anthropomorphism tends more strongly toward conscious attribution. This notion is illustrated by a possible explanatory approach of Guthrie (1993), which assumes that people use the attribution of human-like features to make sense of new and complex phenomena. As a well-known and tangible example, Guthrie (1993) cites religion, which in many cases invokes one or several human-like gods to explain matters as complex as the origin of the earth.

With the arrival of digital agents in everyday life, the demonstrated human tendency to ascribe human attributes, such as an affective state, to artificial entities has opened up a whole new research field. Scholars dealing with robotics or embodied agents in particular have begun to investigate empathy intensively in this context. Published research in this area mainly follows two streams: (1) empathic reactions by the agent (e.g., Li et al., 2017; Luo et al., 2019; Schneider et al., 2012) and (2) empathic reactions toward the agent (e.g., Kwak et al., 2013; Riek et al., 2009b; Rosenthal-von der Pütten et al., 2013; Rosenthal-von der Pütten et al., 2014). For the further course of this thesis, the second stream in particular is of importance.

The extant research on whether artificial agents can evoke empathy is, however, subject to a significant limitation: it mostly builds on triggering empathy by inflicting pain on the agent (e.g., Paiva et al., 2004; Rosenthal-von der Pütten et al., 2013; Rosenthal-von der Pütten et al., 2014), although empathy in its basic understanding is not limited to negatively valenced emotions. For example, Rosenthal-von der Pütten et al. (2013) were able to show that subjects reported empathic concern for a robot that was being tortured. Furthermore, they found that this effect occurred irrespective of whether the subjects had previously interacted with the robot. Shortly after this experiment, Rosenthal-von der Pütten et al. (2014) repeated it, this time including not only a human-to-robot scenario but a human-to-human scenario as well. However, they could not unveil significant differences in empathy toward either the human or the robot. As their subjects’ brain activity was simultaneously scanned using fMRI, they found similar activation patterns across the experimental conditions, regardless of whether participants were exposed to a human-to-human or a human-to-robot interaction.
